PIM: Video Coding using Perceptual Importance Maps
Human perception is at the core of lossy video compression, with numerous
approaches developed for perceptual quality assessment and improvement over the
past two decades. In the determination of perceptual quality, different
spatio-temporal regions of the video differ in their relative importance to the
human viewer. However, since it is challenging to infer or even collect such
fine-grained information, it is often not used during compression beyond
low-level heuristics. We present a framework that facilitates research into
fine-grained subjective importance in compressed videos, and we use it to
improve the rate-distortion performance of an existing video codec (x264).
The contributions of this work are threefold: (1) we introduce a web tool that
enables scalable collection of fine-grained perceptual importance by having
users interactively paint spatio-temporal maps over encoded videos; (2) we use
this tool to collect a dataset of 178 videos comprising a total of 14,443
frames of human-annotated spatio-temporal importance maps; and (3) we use
our curated dataset to train a lightweight machine learning model which can
predict these spatio-temporal importance regions. We demonstrate via a
subjective study that encoding the videos in our dataset while taking into
account the importance maps leads to higher perceptual quality at the same
bitrate, with viewers preferring the importance-map-guided encodings over the
baseline videos. Similarly, we show that for the 18 videos in the test set,
the importance maps predicted by our model lead to higher perceptual quality,
with the resulting videos preferred over the baseline at the same bitrate.
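To make the map-guided encoding concrete, below is a minimal sketch of one plausible mechanism, assuming the importance map is averaged over the encoder's 16x16 macroblock grid and converted into per-macroblock quantizer (QP) offsets; the offset range and the importance_to_qp_offsets helper are hypothetical illustrations, not the paper's exact scheme.

```python
import numpy as np

def importance_to_qp_offsets(importance, mb_size=16, max_offset=6.0):
    """Map a per-pixel importance map in [0, 1] to per-macroblock QP offsets.

    Higher importance -> negative offset (finer quantization, more bits);
    lower importance -> positive offset (coarser quantization, fewer bits).
    Hypothetical mapping for illustration only.
    """
    h, w = importance.shape
    mb_h, mb_w = h // mb_size, w // mb_size
    # Average the importance over each macroblock.
    blocks = importance[:mb_h * mb_size, :mb_w * mb_size]
    blocks = blocks.reshape(mb_h, mb_size, mb_w, mb_size).mean(axis=(1, 3))
    # Center around the mean so the frame's average QP, and hence the
    # bitrate, stays roughly unchanged.
    centered = blocks - blocks.mean()
    # Scale into [-max_offset, +max_offset]; important blocks get lower QP.
    scale = max(abs(float(centered.min())), abs(float(centered.max())), 1e-8)
    return (-centered / scale * max_offset).astype(np.float32)

# Example: a 64x64 frame whose top-left quadrant is marked important.
imp = np.zeros((64, 64))
imp[:32, :32] = 1.0
print(importance_to_qp_offsets(imp))  # 4x4 grid of per-macroblock offsets
```

Offsets of this form could then be handed to the encoder; x264's API accepts per-macroblock quantizer offsets alongside each input picture, so regions viewers marked as important receive a lower QP (more bits) at an essentially unchanged average bitrate.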
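Separately, since the abstract does not pin down the predictor, here is a deliberately small, fully-convolutional sketch of what a lightweight per-frame importance predictor could look like; the ImportanceNet name, the architecture, and the per-frame (rather than spatio-temporal) formulation are assumptions made for illustration, not the authors' model.

```python
import torch
import torch.nn as nn

class ImportanceNet(nn.Module):
    """A small fully-convolutional net mapping an RGB frame to a
    per-pixel importance map in [0, 1]. Hypothetical architecture."""

    def __init__(self, width=16):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, width, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(width, width, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(width, 1, kernel_size=1),  # one importance channel
        )

    def forward(self, x):
        # Sigmoid keeps predictions in [0, 1], matching painted annotations.
        return torch.sigmoid(self.body(x))

model = ImportanceNet()
frame = torch.rand(1, 3, 64, 64)  # dummy RGB frame, NCHW
print(model(frame).shape)         # torch.Size([1, 1, 64, 64])
```

Trained with, say, a per-pixel binary cross-entropy loss against the painted maps, a model of this size would be cheap enough to run alongside encoding, which is presumably the point of keeping it lightweight.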
Computational integrity with a public random string from quasi-linear PCPs
A party running a computation remotely may benefit from misreporting its output, say, to lower its tax. Cryptographic protocols that detect and prevent such falsities hold the promise of enhancing the security of decentralized systems with stringent computational integrity requirements, like Bitcoin [Nak09]. To gain public trust it is imperative to use publicly verifiable protocols that have no “backdoors” and that can be set up using only a short public random string. Probabilistically Checkable Proof (PCP) systems [BFL90, BFLS91, AS98, ALM+98] can be used to construct astonishingly efficient protocols [Kil92, Mic00] of this nature, but some of their main components, namely proof composition [AS98] and low-degree testing via PCPs of Proximity (PCPPs) [BGH+05, DR06], have been considered efficient only asymptotically, for unrealistically large computations; recent cryptographic alternatives [PGHR13, BCG+13a] suffer from a non-public setup phase.
This work introduces SCI, the first implementation of a scalable PCP system (one that uses both PCPPs and proof composition). We used SCI to prove the correctness of executions of a simple processor for cycle counts up to those reported in Figure 1, and calculated its break-even point [SVP+12, SMBW12] (Figure 2). The significance of our findings is twofold: (i) it marks the transition of core PCP techniques (like proof composition and PCPs of Proximity) from mathematical theory to practical system engineering, and (ii) the thresholds obtained are nearly achievable and hence show that PCP-supported computational integrity is closer to reality than previously assumed.
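To illustrate the break-even notion cited above [SVP+12, SMBW12]: it is the computation length at which verifying a proof becomes cheaper than naively re-executing the computation. The sketch below locates that crossover numerically under an assumed verifier-cost model of a * log2(T)^c; both constants are illustrative and are not measurements from the paper.

```python
import math

def break_even(a=1e6, c=3):
    """Smallest power-of-two cycle count T at which PCP verification,
    modeled as a * log2(T)**c, beats naive re-execution, modeled as T.
    The cost model and constants are illustrative assumptions."""
    T = 2
    while a * math.log2(T) ** c >= T:
        T *= 2
    return T

print(break_even())  # 2**36 for these constants: ~6.9e10 cycles
```

Because the modeled verifier cost grows polylogarithmically while re-execution grows linearly in T, such a crossover always exists; the practical question the paper addresses is whether the constants are small enough for it to occur at realistic computation sizes.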